


Optimal Query Complexity of Secure Stochastic Convex Optimization

Neural Information Processing Systems

We study the \emph{secure} stochastic convex optimization problem: a learner aims to learn the optimal point of a convex function by sequentially querying a (stochastic) gradient oracle, while an adversary aims to free-ride and infer the learner's learning outcome by observing the learner's queries. The adversary observes only the points of the queries, not the feedback from the oracle. The goal of the learner is to optimize accuracy, i.e., obtain an accurate estimate of the optimal point, while securing her privacy, i.e., making it difficult for the adversary to infer the optimal point. We formally quantify this tradeoff between the learner's accuracy and privacy and characterize lower and upper bounds on the learner's query complexity as a function of the desired levels of accuracy and privacy. For the lower bounds, we provide a general template based on information-theoretic analysis and then tailor it to several families of problems, including stochastic convex optimization and (noisy) binary search. We also present a generic secure learning protocol that achieves the matching upper bound up to logarithmic factors.
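The accuracy/privacy tradeoff described in the abstract can be sketched with a toy protocol. This is an illustrative sketch only, not the paper's actual construction: a one-dimensional learner runs SGD against a noisy gradient oracle, but pads each round with uniform decoy queries so that the query stream an adversary observes does not by itself localize the optimum. All names and parameters here (`noisy_grad`, `secure_sgd`, `decoys_per_step`) are hypothetical.

```python
import random

def noisy_grad(x, opt=0.3, sigma=0.1):
    # Stochastic gradient oracle for f(x) = (x - opt)^2 / 2 on [0, 1]:
    # the true gradient plus Gaussian noise.
    return (x - opt) + random.gauss(0.0, sigma)

def secure_sgd(oracle, steps=2000, decoys_per_step=3, lr=1.0, seed=0):
    """Toy 'secure' learner: each round issues one informative query plus
    several uniform decoy queries, and uses feedback only from the real one.
    An adversary seeing the shuffled query stream cannot tell which points
    carried information, at the cost of extra queries per round."""
    rng = random.Random(seed)
    x = rng.random()                      # random start in [0, 1]
    transcript = []                       # what the adversary observes
    for t in range(1, steps + 1):
        queries = [x] + [rng.random() for _ in range(decoys_per_step)]
        rng.shuffle(queries)              # hide which query is informative
        transcript.extend(queries)
        g = oracle(x)                     # feedback only for the real query
        x = min(1.0, max(0.0, x - (lr / t) * g))  # projected SGD step
    return x, transcript
```

With `decoys_per_step` decoys each round, the query complexity inflates by a factor of `1 + decoys_per_step` relative to plain SGD, a crude analogue of the dependence on the desired privacy level that the paper quantifies precisely.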


Review for NeurIPS paper: Optimal Query Complexity of Secure Stochastic Convex Optimization

Neural Information Processing Systems

In terms of presentation, the paper constantly switches terminology, referring to both "secure" and "private." Given that differential privacy has by now been established as a formal notion of privacy, I believe it is better to refer to this problem as "secure" optimization. For example, on page 2, line 49, classic bounds are referred to as "non-private"; I think it would be better to use "secure" exclusively in these definitions. However, they are clearly vacuous for the classical "non-secure" case.


Review for NeurIPS paper: Optimal Query Complexity of Secure Stochastic Convex Optimization

Neural Information Processing Systems

This paper was carefully reviewed and discussed by our reviewer panel. The consensus was that this is nice work, the rebuttal had some sway, and the paper can be published at NeurIPS this year. But please do take into account the detailed comments of the reviewers when putting together your camera-ready version.

